Blurring the Line Between Human and Machine: Marketing Artificial Intelligence
One of the most prominent and potentially transformative trends in society today is machines becoming more human-like, driven by progress in artificial intelligence. How this trend will impact individuals, private and public organizations, and society as a whole is still unknown, and depends largely on how individual consumers choose to adopt and use these technologies. This dissertation focuses on understanding how consumers perceive, adopt, and use technologies that blur the line between human and machine, with two primary goals. First, I build on psychological and philosophical theories of mind perception, anthropomorphism, and dehumanization, and on management research into technology adoption, in order to develop a theoretical understanding of the forces that shape consumer adoption of these technologies. Second, I develop practical marketing interventions that can be used to influence patterns of adoption according to the desired outcome.
This dissertation is organized as follows. Essay 1 develops a conceptual framework for understanding what AI is, what it can do, and some of the key antecedents and consequences of its adoption. The subsequent two Essays test various parts of this framework. Essay 2 explores consumers’ willingness to use algorithms to perform tasks normally done by humans, focusing specifically on how the nature of the task for which algorithms are used and the human-likeness of the algorithm itself impact consumers’ use of the algorithm. Essay 3 focuses on the use of social robots in consumption contexts, specifically addressing the role of robots’ physical and mental human-likeness in shaping consumers’ comfort with and perceived usefulness of such robots.
Together, these three Essays offer an empirically supported conceptual structure for marketing researchers and practitioners to understand artificial intelligence and influence the processes through which consumers perceive and adopt it. Artificial intelligence has the potential to create enormous value for consumers, firms, and society, but also poses many profound challenges and risks. A better understanding of how this transformative technology is perceived and used can potentially help to maximize its potential value and minimize its risks.
Decisional enhancement and autonomy: public attitudes towards overt and covert nudges
Ubiquitous cognitive biases hinder optimal decision making. Recent calls to assist decision makers in mitigating these biases—via interventions commonly called “nudges”—have been criticized as infringing upon individual autonomy. We tested the hypothesis that such “decisional enhancement” programs that target overt decision making—i.e., conscious, higher-order cognitive processes—would be more acceptable than similar programs that affect covert decision making—i.e., subconscious, lower-order processes. We presented respondents with vignettes in which they chose between an option that included a decisional enhancement program and a neutral option. To assess preferences for overt or covert decisional enhancement, we used the contrastive vignette technique, in which different groups of respondents were presented with one of a pair of vignettes that targeted either conscious or subconscious processes. Other than the nature of the decisional enhancement, the vignettes were identical, allowing us to isolate the influence of the type of decisional enhancement on preferences. Overall, we found support for the hypothesis that people prefer conscious decisional enhancement. Further, respondents who perceived the influence of the program as more conscious than subconscious reported that their decisions under the program would be more “authentic”. However, this relative favorability was somewhat contingent upon context. We discuss our results with respect to the implementation and ethics of decisional enhancement.
Cyborg Consumers: When Human Enhancement Technologies Are Dehumanizing
New technologies are providing unprecedented opportunities for consumers to enhance their bodies and minds, including traits typically seen as comprising "humanness." We show that such enhancements can be dehumanizing, and explore how the perceived naturalness of the means and outcome of enhancement can explain this technological dehumanization.
Understanding and Improving Consumer Reactions to Service Bots
Many firms are beginning to replace customer service employees with bots, from humanoid service robots to digital chatbots. Using real human-bot interactions in lab and field settings, we study consumers’ evaluations of bot-provided service. We find that service evaluations are more negative when the service provider is a bot versus a human—even when the provided service is identical. This effect is explained by consumers’ belief that service automation is motivated by firm benefits (i.e., cutting costs) at the expense of customer benefits (such as service quality). The effect is eliminated when firms share the economic surplus derived from automation with consumers through price discounts. The effect is reversed when service bots provide unambiguously superior service to human employees—a scenario that may soon become reality. Consumers’ default reactions to service bots are therefore largely negative, but can be equal to or better than reactions to human service providers if firms can demonstrate how automation benefits consumers.